7 research outputs found

    Application of Deep Neural Network in Healthcare data

    Biomedical data analysis plays an important role in the provision of healthcare services. For decades, medical practitioners and researchers have extracted and analysed biomedical data to derive health-related information. Recently, the amount of biomedical data collected has risen significantly, driven by technological advances that have made biomedical acquisition devices more portable, easier to use and more affordable. As the volume of biomedical data produced every day grows, so does the risk of human analytical and diagnostic error; for example, an estimated 40 million diagnostic errors involving medical imaging occur annually worldwide. This raises the need for fast, accurate, reliable and automatic means of analysing biomedical data. Conventional machine learning has been used to assist in the automatic analysis and interpretation of biomedical data, but it is limited by the need for a feature-extraction step before models can be trained. This thesis therefore investigates convolutional neural networks (CNNs), which learn features directly from raw data, as an alternative. To this end, three studies were conducted: two using EEG signals and one using microscopic images of cancer cells. In the first study, our method interpreted motor imagery activities recorded with a 64-channel EEG device, reaching 99% classification accuracy when all 64 channels were used and 91.5% when the channels were reduced to eight. In the second study, which involved steady-state visual evoked potential (SSVEP) EEG signals, our method achieved an average classification accuracy of 94% using a two-channel, skin-like EEG sensor. In the third study, on the authentication of cancer cell lines from microscopic images, our method attained an average F1-score of 0.91 across eight classes of cancer cell lines. The studies reported in this thesis show that CNNs can play a major role in developing computerised analysis of biomedical data. Towards better healthcare through CNN-based analysis of different formats of biomedical data, the thesis makes three major contributions: i) a new method for EEG channel selection towards the development of portable EEG sensors for real-life applications; ii) a method for cancer cell line authentication in the laboratory environment towards the development of anti-cancer drugs; and iii) a method for the authentication of isogenic cancer cell lines.
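
    A common thread across the three studies is that a convolutional network is trained on the raw signal or image, with no separate feature-extraction stage. A minimal sketch of that idea for multi-channel EEG is given below; the channel count, window length, class count, architecture and random placeholder data are illustrative assumptions rather than the thesis's actual models.

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions: 64 EEG channels, 480-sample windows, 5 motor-imagery classes.
n_channels, n_samples, n_classes = 64, 480, 5

# Small 1D CNN that consumes raw EEG windows directly (no hand-crafted features).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_samples, n_channels)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder windows and integer labels standing in for a real EEG dataset.
X = np.random.randn(128, n_samples, n_channels).astype("float32")
y = np.random.randint(0, n_classes, size=128)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```

    The same end-to-end pattern carries over to the image study, where 1D convolutions over EEG samples are replaced by 2D convolutions over microscopy images.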

    Efficient Channel Selection Approach for Motor Imaginary Classification based on Convolutional Neural Network

    For disabled people, a brain-computer interface (BCI) may be the only way to communicate and to exert control. A person's intention can be decoded from their brainwaves during motor imagery, which can help them control their environment without making any physical movement. To decode intentions from brainwaves during motor imagery activities, machine learning models have been trained on features extracted from the acquired EEG signals. Although this technique has been successful, it has several limitations and difficulties, especially in the feature-extraction step. Moreover, many current BCI systems rely on a large number of channels (e.g. 64) to capture the spatial information needed to train a machine learning model. In this study, a convolutional neural network (CNN) is used to decode five motor imagery intentions from EEG signals obtained from four subjects using a 64-channel EEG device. A CNN model trained on raw EEG data achieved a mean classification accuracy of 99.7%. Channel selection based on the weights learned by the trained CNN was then performed; subsequent models trained on only the two channels with the highest weights attained a high accuracy (98% on average) for three of the four participants.
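
    The abstract does not spell out how the learned weights are converted into a channel ranking. One plausible reading, sketched below purely as an assumption, scores each EEG channel by the magnitude of the first convolutional layer's kernel weights attached to it and keeps the top-ranked channels; the tiny placeholder model and random data only stand in for the trained 64-channel motor-imagery CNN.

```python
import numpy as np
import tensorflow as tf

# Placeholder model and data standing in for the trained 64-channel motor-imagery CNN.
n_channels, n_samples, n_classes = 64, 480, 5
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_samples, n_channels)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
X = np.random.randn(64, n_samples, n_channels).astype("float32")
y = np.random.randint(0, n_classes, size=64)
model.fit(X, y, epochs=1, verbose=0)

# Score each EEG channel by the total magnitude of the first conv layer's kernel
# weights attached to it; the Conv1D kernel has shape (kernel_size, in_channels, filters).
conv = next(layer for layer in model.layers if isinstance(layer, tf.keras.layers.Conv1D))
kernel = conv.get_weights()[0]
channel_scores = np.abs(kernel).sum(axis=(0, 2))

k = 2  # the study retrains on the two highest-weighted channels
selected = np.argsort(channel_scores)[::-1][:k]
print("selected channel indices:", selected)

# A follow-up model would then be retrained on X[:, :, selected] only.
```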

    Fully portable and wireless universal brain-machine interfaces enabled by flexible scalp electronics and deep-learning algorithm

    Variation in human brains creates difficulty in implementing electroencephalography (EEG) into universal brain-machine interfaces (BMI). Conventional EEG systems typically suffer from motion artifacts, extensive preparation time, and bulky equipment, while existing EEG classification methods require training on a per-subject or per-session basis. Here, we introduce a fully portable, wireless, flexible scalp electronic system, incorporating a set of dry electrodes and a flexible membrane circuit. Time domain analysis using convolutional neural networks allows for an accurate, real-time classification of steady-state visually evoked potentials on the occipital lobe. Simultaneous comparison of EEG signals with two commercial systems captures the improved performance of the flexible electronics with significant reduction of noise and electromagnetic interference. The two-channel scalp electronic system achieves a high information transfer rate (122.1 ± 3.53 bits per minute) with six human subjects, allowing for a wireless, real-time, universal EEG classification for an electronic wheelchair, motorized vehicle, and keyboard-less presentation.
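
    The 122.1 bits per minute figure is an information transfer rate (ITR). The standard Wolpaw formula behind such numbers is compact enough to show; the target count, accuracy and decision rate passed in below are hypothetical values, not the paper's.

```python
import math

def itr_bits_per_minute(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate for an N-target BCI.

    n_targets: number of selectable commands (e.g. SSVEP flicker frequencies)
    accuracy: classification accuracy P, with 1/n_targets < P <= 1
    selections_per_min: how many classifications the system issues per minute
    """
    p = accuracy
    bits_per_selection = math.log2(n_targets)
    if p < 1.0:
        bits_per_selection += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits_per_selection * selections_per_min

# Hypothetical example: 4 SSVEP targets, 95% accuracy, one decision per second.
print(round(itr_bits_per_minute(4, 0.95, 60), 1))
```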

    Towards Image-based Cancer Cell Lines Authentication Using Deep Neural Networks

    Although short tandem repeat (STR) analysis is available as a reliable method for determining the genetic origin of cell lines, the occurrence of misauthenticated cell lines remains an important issue. Reasons include the cost, effort and time associated with STR analysis. Moreover, there are currently no methods for discriminating between isogenic cell lines (cell lines of the same genetic origin, e.g. different cell lines derived from the same organism, clonal sublines, or sublines adapted to grow under certain conditions). Hence, additional complementary, ideally low-cost and low-effort methods are required that enable 1) the monitoring of cell line identity as part of the daily laboratory routine and 2) the authentication of isogenic cell lines. In this research, we automate the process of cell line identification by image-based analysis using deep convolutional neural networks. Two convolutional neural network models (MobileNet and InceptionResNet V2) were trained to automatically identify four parental cancer cell lines (COLO 704, EFO-21, EFO-27 and UKF-NB-3) and their sublines adapted to the anti-cancer drugs cisplatin (COLO-704rCDDP1000, EFO-21rCDDP2000, EFO-27rCDDP2000) or oxaliplatin (UKF-NB-3rOXALI2000), resulting in an eight-class problem. Our best-performing model, InceptionResNet V2, achieved an average F1-score of 0.91 under 10-fold cross-validation, with an average area under the curve (AUC) of 0.95, on the eight-class problem. When the four parental cell lines and the respective drug-adapted cells were each authenticated separately as four-class classification problems, the same model achieved average F1-scores of 0.94 and 0.96, respectively. These findings provide the basis for developing image-based deep learning into a readily available, easy-to-use methodology for routine monitoring of cell line identity, including isogenic cell lines. It should be noted that this is a proof of principle that images can also be used for the authentication of cancer cell lines, not a replacement for the STR method.
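
    Both networks named in the abstract are standard ImageNet-pretrained architectures, so the training setup can be approximated with off-the-shelf transfer learning. The sketch below, using tf.keras, freezes an InceptionResNetV2 backbone and adds an eight-way softmax head; the input size, dropout rate, learning rate and frozen-backbone choice are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

# Assumed setup: 299x299 RGB microscopy crops, eight cell-line classes, frozen backbone.
n_classes = 8

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # train only the new classification head at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# Images should first go through the matching preprocessing function:
# x = tf.keras.applications.inception_resnet_v2.preprocess_input(x)
```

    Unfreezing some backbone layers for fine-tuning once the head has converged is a common follow-up step in this kind of setup.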

    A Channel Selection Approach Based on Convolutional Neural Network for Multi-channel EEG Motor Imagery Decoding

    For many disabled people, a brain-computer interface (BCI) may be the only way to communicate with others and to control things around them. Using the motor imagery paradigm, an individual's intention can be decoded from their brainwaves to help them interact with their environment without having to make any physical movement. For decades, machine learning models trained on features extracted from acquired electroencephalogram (EEG) signals have been used to decode motor imagery activities. This approach has several limitations and constraints, especially during feature extraction. In addition, the large number of channels on current EEG devices makes them hard to use in real life, as they are bulky, uncomfortable to wear, and take a lot of time to prepare. In this paper, we introduce a technique that uses a convolutional neural network (CNN) to perform channel selection and to decode multiple classes of motor imagery intentions from four participants who are amputees. A CNN model trained on EEG data from 64 channels achieved a mean classification accuracy of 99.7% over five classes. Channel selection based on weights extracted from the trained model was then performed; subsequent models trained on the eight selected channels achieved a reasonable accuracy of 91.5%. Training the model in the time domain and in the frequency domain was also compared, and different window sizes were tested to assess the feasibility of real-time application, as sketched below. Our channel selection method was then evaluated on a publicly available motor imagery EEG dataset.
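
    Two of the practical questions raised here, how long an EEG window to classify and whether to feed the network time-domain samples or a frequency-domain representation, come down to simple preprocessing. The sketch below shows one way to segment a recording into overlapping windows and derive a magnitude spectrum per window; the sampling rate, window length and step are hypothetical.

```python
import numpy as np

def make_windows(eeg: np.ndarray, win: int, step: int) -> np.ndarray:
    """Slice a (samples, channels) recording into overlapping windows.

    Returns an array of shape (n_windows, win, channels); win and step are in
    samples, so the window duration in seconds depends on the sampling rate.
    """
    starts = range(0, eeg.shape[0] - win + 1, step)
    return np.stack([eeg[s:s + win] for s in starts])

def to_frequency_domain(windows: np.ndarray) -> np.ndarray:
    """Magnitude spectrum per window and channel, one simple frequency-domain input."""
    return np.abs(np.fft.rfft(windows, axis=1))

# Hypothetical recording: 10 s at 160 Hz, 64 channels; 1 s windows, 0.5 s step.
eeg = np.random.randn(1600, 64)
time_input = make_windows(eeg, win=160, step=80)
freq_input = to_frequency_domain(time_input)
print(time_input.shape, freq_input.shape)  # (19, 160, 64) (19, 81, 64)
```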

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 inter-disciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on 6 guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation towards clinical practice of medical AI.

    A new technique for the prediction of heart failure risk driven by hierarchical neighborhood component-based learning and adaptive multi-layer networks

    The recently evolving remote healthcare technology could potentially aid the realization of cost-effective and lasting solutions to life-threatening diseases such as heart failure. Such a remote healthcare system should integrate an effective heart failure risk monitoring and prediction platform. However, developing a heart failure risk (HFR) prediction method that objectively incorporates the individual contributions of HFR risk factors, which are required for adequate prediction, remains a challenge. To address this research gap, a new approach driven by hierarchical neighborhood component-based learning (HNCL) and adaptive multi-layer networks (AMLN) is proposed. In the proposed method, the HNCL module first learns the interrelations among the HFR attributes/risk factors to construct a set of informative features, regarded as a global weight vector that reflects the individual contribution of each risk factor. The constructed global weight vector is then applied in building an AMLN model for the prediction of HFR. Moreover, the proposed method's performance was extensively validated on a benchmark clinical database of potential heart failure patients and compared with previous studies using prediction accuracy, performance plots, receiver operating characteristic analysis, error-histogram analysis, specificity, and sensitivity metrics. From the experimental results, we found that the proposed method (AMLN-HNCL) achieved significantly higher and more stable predictions, with an improvement of approximately 11.10% over the commonly applied method, and recorded 9.09% and 12.48% improvements in specificity and sensitivity, respectively. The superior performance of the proposed method is likely because the interrelations amongst the risk factors were adequately learnt and their individual contributions were objectively accounted for in the prediction task. Thus, we believe that the proposed method could facilitate the practical implementation of an accurate and robust HFR prediction module in the currently emerging remote healthcare systems, especially Internet of Medical Things (IoMT) systems. The method may also be applied in wearable mobile healthcare devices capable of monitoring heart failure status in individuals.
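
    The HNCL and AMLN components themselves are not described in enough detail here to reproduce, but the overall pipeline, learn one relevance weight per risk factor and then train a multi-layer network on the reweighted inputs, can be illustrated with stand-in pieces. In the sketch below, ANOVA F-scores substitute for HNCL, scikit-learn's MLPClassifier substitutes for AMLN, and a bundled dataset stands in for the clinical heart-failure database; none of these are the authors' actual components.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in tabular clinical data
from sklearn.feature_selection import f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder dataset; the paper uses a clinical heart-failure database not reproduced here.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# "Global weight vector": one relevance score per risk factor. ANOVA F-scores are used
# here purely for illustration; the paper derives its weights with the HNCL module.
f_scores, _ = f_classif(X_tr, y_tr)
weights = f_scores / f_scores.sum()

# Train a multi-layer network on the reweighted inputs (stand-in for the AMLN model).
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
mlp.fit(X_tr * weights, y_tr)
print("held-out accuracy:", round(mlp.score(X_te * weights, y_te), 3))
```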